
    Federated Edge Learning : Design Issues and Challenges

    Federated Learning (FL) is a distributed machine learning technique in which each device contributes to the learning model by independently computing the gradient on its local training data. It has recently become a hot research topic, as it promises several benefits related to data privacy and scalability. However, implementing FL at the network edge is challenging due to system and data heterogeneity and resource constraints. In this article, we examine the existing challenges and trade-offs in Federated Edge Learning (FEEL). The design of FEEL algorithms for resource-efficient learning raises several challenges, which are essentially related to the multidisciplinary nature of the problem. As data is the key component of learning, this article advocates a new set of considerations for data characteristics in wireless scheduling algorithms in FEEL. Hence, we propose a general framework for data-aware scheduling as a guideline for future research directions. We also discuss the main axes and requirements for data evaluation, as well as some exploitable techniques and metrics.
    Comment: Submitted to IEEE Network Magazine
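    As an illustration of the data-aware scheduling idea, the sketch below scores candidate devices by mixing channel quality with hypothetical data-value metrics (local data quantity and label entropy). The metrics, weights, and scoring rule are illustrative assumptions, not the article's framework.

```python
import numpy as np

# Hypothetical per-device metrics for a data-aware scheduler (illustrative only;
# the article proposes a general framework, not this specific scoring rule).
rng = np.random.default_rng(0)
num_devices = 10
channel_quality = rng.uniform(0.1, 1.0, num_devices)   # normalized SNR proxy
data_quantity = rng.integers(100, 5000, num_devices)   # local samples per device
label_entropy = rng.uniform(0.2, 1.0, num_devices)     # diversity of local labels

def schedule(k, w_ch=0.4, w_qty=0.3, w_div=0.3):
    """Pick k devices by a weighted score mixing channel state and data value."""
    qty = data_quantity / data_quantity.max()           # scale to [0, 1]
    score = w_ch * channel_quality + w_qty * qty + w_div * label_entropy
    return np.argsort(score)[-k:][::-1]                 # top-k device indices

print(schedule(k=3))                                    # e.g. [7 2 5]
```

    A pure channel-based scheduler corresponds to w_qty = w_div = 0; the point of a data-aware rule is that the other two terms can promote devices whose data is more valuable to the global model.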

    How Far Can We Go in Compute-less Networking: Computation Correctness and Accuracy

    Emerging applications such as augmented reality and the tactile Internet are compute-intensive and latency-sensitive, which hampers running them in constrained end devices alone or in the distant cloud. The stringent requirements of such applications drove the realization of Edge computing, in which computation is offloaded close to users. Compute-less networking is an extension of edge computing that aims at reducing computation and abridging communication by adopting in-network computing and computation reuse. Computation reuse caches the results of computations and uses them to perform similar tasks in the future, thereby avoiding redundant calculations and optimizing the use of resources. In this paper, we focus on the correctness of the final output produced by computation reuse. Since inputs may be similar rather than identical, reusing previous computations raises questions about the accuracy of the final results. To this end, we implement a proof of concept to study and gauge the effectiveness and efficiency of computation reuse. We are able to reduce task completion time by up to 80% while ensuring high correctness. We further discuss open challenges and highlight future research directions.
    Comment: Accepted for publication by the IEEE Network Magazine
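    To make the computation-reuse idea concrete, here is a minimal sketch: a cache that returns a stored result whenever a new input falls within a distance tolerance of a previously seen input. The ReuseCache class, the tolerance threshold, and the Euclidean matching are illustrative assumptions, not the paper's implementation; they simply show why near-identical inputs can trade accuracy for reuse.

```python
import numpy as np

class ReuseCache:
    """Toy computation-reuse cache: if a new input is within `tol` (Euclidean
    distance) of a cached input, return the stored result instead of recomputing.
    Illustrative sketch only; the paper's actual matching scheme may differ."""

    def __init__(self, tol=0.1):
        self.tol = tol
        self.inputs, self.outputs = [], []

    def lookup(self, x):
        for xi, yi in zip(self.inputs, self.outputs):
            if np.linalg.norm(x - xi) <= self.tol:
                return yi          # reuse: skip the expensive computation
        return None

    def compute(self, x, fn):
        y = self.lookup(x)
        if y is None:              # cache miss: run the task and store the result
            y = fn(x)
            self.inputs.append(x)
            self.outputs.append(y)
        return y

cache = ReuseCache(tol=0.05)
heavy = lambda x: np.round(np.sin(x).sum(), 3)          # stand-in for a heavy task
a = cache.compute(np.array([1.0, 2.0]), heavy)          # computed
b = cache.compute(np.array([1.01, 2.0]), heavy)         # reused (within tolerance)
print(a, b)
```

    The reused result b is the output for a slightly different input, which is exactly the correctness-versus-savings trade-off the paper studies: a larger tol saves more computation but risks larger output error.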

    Toward a Wired Ad Hoc Nanonetwork

    Nanomachines promise to enable new medical applications, including drug delivery and real-time detection of chemical reactions inside the human body. Such complex tasks require cooperation between nanomachines through a communication network. Wireless ad hoc networks, using molecular or electromagnetic communication, have been proposed in the literature to create flexible nanonetworks between nanomachines. In this paper, we propose a Wired Ad hoc NanoNETwork (WANNET) model design using actin-based nano-communication. In the proposed model, actin filament self-assembly and disassembly is used to create flexible nanowires between nanomachines, and electrons are used as carriers of information. We give a general overview of the application layer, Medium Access Control (MAC) layer, and physical layer of the model. We also detail the analytical model of the physical layer using actin nanowire equivalent circuits, and we present an estimation of the circuit components' values. Numerical results of the derived model are provided in terms of attenuation, phase, and delay as functions of frequency and of the distance between nanomachines. The maximum throughput of the actin-based nanowire is also provided, along with a comparison between the maximum throughput of the proposed WANNET and other proposed approaches. The obtained results show that the proposed wired ad hoc nanonetwork can achieve very high throughput with smaller delay than other proposed wireless molecular communication networks.
    Comment: Submitted to IEEE International Conference on Communications 2020 (ICC 2020)
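    The equivalent-circuit analysis can be sketched with a generic lumped-element model: the snippet below computes attenuation and phase for a cascade of RC sections standing in for a nanowire. The component values R_seg, C_seg, and the section count n_seg are placeholders, not the actin-filament estimates derived in the paper, and an unloaded RC cascade is only a rough stand-in for the paper's model.

```python
import numpy as np

# Placeholder per-segment values for a lumped-element nanowire model; the paper
# derives actual actin-filament circuit values, which are not reproduced here.
R_seg = 1e9      # series resistance per segment (ohms), illustrative
C_seg = 1e-18    # shunt capacitance per segment (farads), illustrative
n_seg = 100      # number of cascaded RC segments between nanomachines

def transfer(freq_hz):
    """Voltage transfer H(jw) of n_seg cascaded RC low-pass sections (unloaded)."""
    w = 2 * np.pi * freq_hz
    h_section = 1.0 / (1.0 + 1j * w * R_seg * C_seg)    # single RC section
    return h_section ** n_seg

f = np.logspace(3, 9, 7)                                # 1 kHz .. 1 GHz
H = transfer(f)
atten_db = -20 * np.log10(np.abs(H))                    # attenuation in dB
phase_rad = np.angle(H)                                 # phase (delay follows from slope)
for fi, a, p in zip(f, atten_db, phase_rad):
    print(f"{fi:9.0f} Hz  {a:7.2f} dB  {p:+.3f} rad")
```

    With more segments (i.e., a longer wire), the per-section low-pass responses multiply, so attenuation grows with both frequency and distance, which matches the qualitative shape of the results the abstract describes.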

    User Trajectory Prediction in Mobile Wireless Networks Using Quantum Reservoir Computing

    This paper applies a quantum machine learning technique called quantum reservoir computing (QRC) to predict mobile users' trajectories in mobile wireless networks. Trajectory prediction is a temporal information processing task and a mobility management problem that is essential for self-organizing and autonomous 6G networks. Our aim is to accurately predict the future positions of mobile users in wireless networks using QRC. To do so, we use a real-world time series dataset to model mobile users' trajectories. The QRC approach has two components: reservoir computing (RC) and quantum computing (QC). In RC, training is more computationally efficient than training simple recurrent neural networks (RNNs), since only the weights of the output layer are trainable. The internal part of RC is called the reservoir. For RC to perform well, the weights of the reservoir should be chosen carefully to create highly complex and nonlinear dynamics. QC is used to create such a dynamical reservoir, which maps the input time series into a higher-dimensional computational space composed of dynamical states. After obtaining the high-dimensional dynamical states, a simple linear regression is performed to train the output weights, and the mobile users' trajectories can thus be predicted efficiently. In this paper, we apply a QRC approach based on the Hamiltonian time evolution of a quantum system. We simulate the time evolution using IBM gate-based quantum computers, and the experimental results show that using QRC to predict mobile users' trajectories with only a few qubits is efficient and outperforms classical approaches such as long short-term memory (LSTM) and echo-state networks (ESN).
    Comment: 10 pages, 12 figures, 1 table. This paper is a preprint of a paper submitted to IET Quantum Communication. If accepted, the copy of record will be available at the IET Digital Library
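    The reservoir-computing training principle, where only the linear readout is fit, can be shown with a classical echo-state network. In the sketch below, the quantum reservoir is replaced by a random recurrent one and the trajectory by a synthetic sine coordinate, so this is an analogy to the paper's QRC pipeline, not its method or dataset.

```python
import numpy as np

# Classical echo-state network sketch: a random recurrent reservoir maps the
# input sequence into a high-dimensional state space, then ONLY the readout
# weights are fit by least squares (the core efficiency argument of RC/QRC).
rng = np.random.default_rng(42)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u, collect the state trajectory."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy one-step-ahead prediction on a synthetic "trajectory" coordinate.
u = np.sin(0.1 * np.arange(1000))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.lstsq(X, y, rcond=None)[0]           # train the readout only
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

    In the QRC variant the paper describes, run_reservoir would be replaced by the Hamiltonian time evolution of a few-qubit system, while the cheap linear readout step stays the same.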

    Multi-access edge computing: A survey

    Multi-access Edge Computing (MEC) is a key solution that enables operators to open their networks to new services and IT ecosystems, leveraging edge-cloud benefits in their networks and systems. Located in close proximity to end users and connected devices, MEC provides extremely low latency and high bandwidth while enabling applications to leverage cloud capabilities as necessary. In this article, we illustrate the integration of MEC into current mobile network architectures, as well as the transition mechanisms for migrating to a standard 5G network architecture. We also discuss SDN, NFV, SFC, and network slicing as MEC enablers. Then, we provide a state-of-the-art study of the different approaches that optimize MEC resources and QoS parameters. In this regard, we classify these approaches based on the optimized resources and QoS parameters (i.e., processing, storage, memory, bandwidth, energy, and latency). Finally, we propose an architectural framework for a MEC-NFV environment based on the standard SDN architecture.